
    Hardness of Vertex Deletion and Project Scheduling

    Assuming the Unique Games Conjecture, we show strong inapproximability results for two natural vertex deletion problems on directed graphs: for any integer $k \geq 2$ and arbitrarily small $\epsilon > 0$, the Feedback Vertex Set problem and the DAG Vertex Deletion problem are inapproximable within a factor $k-\epsilon$ even on graphs where the vertices can be almost partitioned into $k$ solutions. This gives a more structured and therefore stronger UGC-based hardness result for the Feedback Vertex Set problem that is also simpler (albeit using the "It Ain't Over Till It's Over" theorem) than the previous hardness result. In comparison to the classical Feedback Vertex Set problem, the DAG Vertex Deletion problem has received little attention and, although we think it is a natural and interesting problem, the main motivation for our inapproximability result stems from its relationship with the classical Discrete Time-Cost Tradeoff Problem. More specifically, our results imply that the deadline version is NP-hard to approximate within any constant assuming the Unique Games Conjecture. This explains the difficulty in obtaining good approximation algorithms for that problem and further motivates previous alternative approaches such as bicriteria approximations. Comment: 18 pages, 1 figure
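
    As a worked reading of the hardness statement above (ours; the paper's exact parameters may differ), "almost partitioned into $k$ solutions" means that in the YES case some solution has size roughly $|V|/k$, while in the NO case every solution must contain almost all vertices. Distinguishing the two cases is then necessary to beat the ratio
    \[
        \frac{(1-\epsilon)\,|V|}{(1/k+\epsilon)\,|V|} \;\longrightarrow\; k \quad \text{as } \epsilon \to 0 .
    \]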

    Approximating ATSP by Relaxing Connectivity

    The standard LP relaxation of the asymmetric traveling salesman problem has been conjectured to have a constant integrality gap in the metric case. We prove this conjecture when restricted to shortest path metrics of node-weighted digraphs. Our arguments are constructive and give a constant factor approximation algorithm for these metrics. We remark that the considered case is more general than the directed analog of the special case of the symmetric traveling salesman problem for which there were recent improvements on Christofides' algorithm. The main idea of our approach is to first consider an easier problem obtained by significantly relaxing the general connectivity requirements into local connectivity conditions. For this relaxed problem, it is quite easy to give an algorithm with a guarantee of 3 on node-weighted shortest path metrics. More surprisingly, we then show that any algorithm (irrespective of the metric) for the relaxed problem can be turned into an algorithm for the asymmetric traveling salesman problem by only losing a small constant factor in the performance guarantee. This leaves open the intriguing task of designing a "good" algorithm for the relaxed problem on general metrics. Comment: 25 pages, 2 figures, fixed some typos in previous version
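
    For reference, the standard LP relaxation referred to above is the Held-Karp relaxation; in the usual notation (arc costs $c_a$, variables $x_a$, and $\delta^+(\cdot)$, $\delta^-(\cdot)$ for outgoing and incoming arcs) it reads
    \[
        \min \sum_{a} c_a x_a \quad \text{s.t.} \quad x(\delta^+(v)) = x(\delta^-(v)) = 1 \ \ \forall v, \qquad x(\delta^+(S)) \geq 1 \ \ \forall\, \emptyset \neq S \subsetneq V, \qquad x \geq 0 .
    \]
    The relaxed problem mentioned in the abstract, as we read it, asks for connectivity only across a restricted family of cuts rather than across every set $S$.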

    The Matching Problem in General Graphs is in Quasi-NC

    We show that the perfect matching problem in general graphs is in Quasi-NC. That is, we give a deterministic parallel algorithm which runs in $O(\log^3 n)$ time on $n^{O(\log^2 n)}$ processors. The result is obtained by a derandomization of the Isolation Lemma for perfect matchings, which was introduced in the classic paper by Mulmuley, Vazirani and Vazirani [1987] to obtain a Randomized NC algorithm. Our proof extends the framework of Fenner, Gurjar and Thierauf [2016], who proved the analogous result in the special case of bipartite graphs. Compared to that setting, several new ingredients are needed due to the significantly more complex structure of perfect matchings in general graphs. In particular, our proof heavily relies on the laminar structure of the faces of the perfect matching polytope. Comment: Accepted to FOCS 2017 (58th Annual IEEE Symposium on Foundations of Computer Science)
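
    For context, the Isolation Lemma referred to above can be stated in its standard form (not specific to this paper): if $\mathcal{F} \subseteq 2^{[m]}$ is any non-empty family of sets and each weight $w_i$, $i \in [m]$, is drawn independently and uniformly from $\{1,\dots,2m\}$, then
    \[
        \Pr\big[\text{the minimum-weight set in } \mathcal{F} \text{ is unique}\big] \;\geq\; \tfrac{1}{2} .
    \]
    For perfect matchings, $[m]$ is the edge set and $\mathcal{F}$ is the family of perfect matchings; derandomizing this step is what yields the quasi-polynomial processor bound.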

    Approximating $k$-Median via Pseudo-Approximation

    We present a novel approximation algorithm for $k$-median that achieves an approximation guarantee of $1+\sqrt{3}+\epsilon$, improving upon the decade-old ratio of $3+\epsilon$. Our approach is based on two components, each of which, we believe, is of independent interest. First, we show that in order to give an $\alpha$-approximation algorithm for $k$-median, it is sufficient to give a \emph{pseudo-approximation algorithm} that finds an $\alpha$-approximate solution by opening $k+O(1)$ facilities. This is a rather surprising result as there exist instances for which opening $k+1$ facilities may lead to a significantly smaller cost than if only $k$ facilities were opened. Second, we give such a pseudo-approximation algorithm with $\alpha = 1+\sqrt{3}+\epsilon$. Prior to our work, it was not even known whether opening $k + o(k)$ facilities would help improve the approximation ratio. Comment: 18 pages
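
    A minimal example (ours, not from the paper) of why one extra facility can change the cost dramatically: take $k+1$ clients, pairwise at distance $D$, each co-located with a potential facility. Then
    \[
        \mathrm{OPT}_{k+1} = 0 \qquad \text{while} \qquad \mathrm{OPT}_{k} \geq D ,
    \]
    so the gap between the two optima is unbounded, which is why the reduction from pseudo-approximation to true approximation is surprising.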

    New Notions and Constructions of Sparsification for Graphs and Hypergraphs

    A sparsifier of a graph $G$ (Bencz\'ur and Karger; Spielman and Teng) is a sparse weighted subgraph $\tilde G$ that approximately retains the cut structure of $G$. For general graphs, non-trivial sparsification is possible only by using weighted graphs in which different edges have different weights. Even for graphs that admit unweighted sparsifiers, there are no known polynomial time algorithms that find such unweighted sparsifiers. We study a weaker notion of sparsification suggested by Oveis Gharan, in which the number of edges in each cut $(S,\bar S)$ is not approximated within a multiplicative factor $(1+\epsilon)$, but is, instead, approximated up to an additive term bounded by $\epsilon$ times $d\cdot |S| + \text{vol}(S)$, where $d$ is the average degree, and $\text{vol}(S)$ is the sum of the degrees of the vertices in $S$. We provide a probabilistic polynomial time construction of such sparsifiers for every graph, and our sparsifiers have a near-optimal number of edges $O(\epsilon^{-2} n\, {\rm polylog}(1/\epsilon))$. We also provide a deterministic polynomial time construction that constructs sparsifiers with a weaker property having the optimal number of edges $O(\epsilon^{-2} n)$. Our constructions also satisfy a spectral version of the ``additive sparsification'' property. Our construction of ``additive sparsifiers'' with $O_\epsilon(n)$ edges also works for hypergraphs, and provides the first non-trivial notion of sparsification for hypergraphs achievable with $O(n)$ hyperedges when $\epsilon$ and the rank $r$ of the hyperedges are constant. Finally, we provide a new construction of spectral hypergraph sparsifiers, according to the standard definition, with ${\rm poly}(\epsilon^{-1},r)\cdot n\log n$ hyperedges, improving over the previous spectral construction (Soma and Yoshida) that used $\tilde O(n^3)$ hyperedges even for constant $r$ and $\epsilon$. Comment: 31 pages
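
    In symbols, the additive guarantee described above asks (as we read the abstract; the paper may normalize the weights of $\tilde G$ differently) that for every vertex set $S$,
    \[
        \big|\, \mathrm{cut}_{\tilde G}(S) - \mathrm{cut}_{G}(S) \,\big| \;\leq\; \epsilon\,\big( d\cdot |S| + \mathrm{vol}(S) \big),
    \]
    where $\mathrm{cut}_H(S)$ denotes the (weighted) number of edges of $H$ crossing $(S,\bar S)$, $d$ is the average degree of $G$, and $\mathrm{vol}(S)$ is the sum of the degrees of the vertices in $S$.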

    Online Contention Resolution Schemes

    We introduce a new rounding technique designed for online optimization problems, which is related to contention resolution schemes, a technique initially introduced in the context of submodular function maximization. Our rounding technique, which we call online contention resolution schemes (OCRSs), is applicable to many online selection problems, including Bayesian online selection, oblivious posted pricing mechanisms, and stochastic probing models. It allows for handling a wide set of constraints, and shares many strong properties of offline contention resolution schemes. In particular, OCRSs for different constraint families can be combined to obtain an OCRS for their intersection. Moreover, we can approximately maximize submodular functions in the online settings we consider. We, thus, get a broadly applicable framework for several online selection problems, which improves on previous approaches in terms of the types of constraints that can be handled, the objective functions that can be dealt with, and the assumptions on the strength of the adversary. Furthermore, we resolve two open problems from the literature; namely, we present the first constant-factor constrained oblivious posted price mechanism for matroid constraints, and the first constant-factor algorithm for weighted stochastic probing with deadlines. Comment: 33 pages. To appear in SODA 201
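
    For context, the usual selectability requirement for such a scheme (the standard notion from the contention resolution literature, stated roughly and not as the paper's exact formulation) is: given a point $x$ in a relaxation of the feasible sets, each element $e$ is independently active with probability $x_e$, elements are revealed online, and a $c$-selectable OCRS must irrevocably accept a feasible subset of the active elements such that
    \[
        \Pr\big[\, e \text{ is accepted} \ \big|\ e \text{ is active} \,\big] \;\geq\; c \qquad \text{for every element } e .
    \]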

    Centrality of Trees for Capacitated k-Center

    There is a large discrepancy in our understanding of uncapacitated and capacitated versions of network location problems. This is perhaps best illustrated by the classical k-center problem: there is a simple tight 2-approximation algorithm for the uncapacitated version whereas the first constant factor approximation algorithm for the general version with capacities was only recently obtained by using an intricate rounding algorithm that achieves an approximation guarantee in the hundreds. Our paper aims to bridge this discrepancy. For the capacitated k-center problem, we give a simple algorithm with a clean analysis that allows us to prove an approximation guarantee of 9. It uses the standard LP relaxation and comes close to settling the integrality gap (after necessary preprocessing), which is narrowed down to either 7, 8 or 9. The algorithm proceeds by first reducing to special tree instances, and then solves such instances optimally. Our concept of tree instances is quite versatile, and applies to natural variants of the capacitated k-center problem for which we also obtain improved algorithms. Finally, we give evidence to show that more powerful preprocessing could lead to better algorithms, by giving an approximation algorithm that beats the integrality gap for instances where all non-zero capacities are uniform. Comment: 21 pages, 2 figures
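
    For reference, a sketch of the standard LP relaxation mentioned above, written for a guessed radius $R$ (our notation: $y_i$ for opening center $i$ with capacity $L_i$, $x_{ij}$ for assigning client $j$ to center $i$, with assignments allowed only when $d(i,j) \leq R$):
    \[
        \sum_i y_i \leq k, \qquad \sum_{i:\, d(i,j) \leq R} x_{ij} = 1 \ \ \forall j, \qquad \sum_j x_{ij} \leq L_i\, y_i \ \ \forall i, \qquad 0 \leq x_{ij} \leq y_i \leq 1 .
    \]
    The rounding question is then whether a fractional solution for the correct $R$ can be turned into an integral assignment within a small constant multiple of $R$.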